Improving End-Use Load Modeling Using Machine Learning and Smart Meter Data
An accurate representation of the voltage-dependent, time-varying energy consumption of end-use electric loads is essential for the operation of modern distribution automation (DA) schemes. Volt-var optimization (VVO), a DA scheme which can decrease energy consumption and peak demand, often leverages electric network models and power flow results to inform control decisions, making it sensitive to errors in load models. End-use load modeling can be improved with additional measurements from advanced metering infrastructure (AMI). This paper presents two novel machine learning algorithms for creating data-driven, time-varying load models for use with DA technologies such as VVO. The first algorithm uses AMI data, k-means clustering, and least-squares optimization to create predictive load models for individual electric customers. The second algorithm uses deep learning (via a convolution-based recurrent neural network) to incorporate additional data and increase model accuracy. The improved accuracy of the load models for both algorithms is validated through simulation.
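The per-customer fitting step of the first algorithm can be illustrated with a minimal sketch. Everything below is illustrative and assumed, not taken from the paper: the synthetic data, the ZIP-style polynomial model form, and the variable names are all hypothetical, and the paper's clustering features may differ.

```python
import numpy as np

# Minimal sketch (synthetic data): fit a ZIP-style polynomial load model
#   P(V) ~ a*V^2 + b*V + c   (constant-impedance, -current, -power terms)
# to smart-meter-like measurements by least squares. The coefficients and
# data here are illustrative, not from the paper.
rng = np.random.default_rng(0)
V = rng.uniform(0.95, 1.05, 200)                 # per-unit voltage samples
a, b, c = 0.3, 0.5, 0.2                          # assumed "true" ZIP fractions
P = a * V**2 + b * V + c + rng.normal(0, 1e-3, V.size)  # noisy power draw

A = np.column_stack([V**2, V, np.ones_like(V)])  # design matrix
coef, *_ = np.linalg.lstsq(A, P, rcond=None)     # least-squares ZIP fit
P_hat = A @ coef                                 # model prediction
```

In the paper's first algorithm a fit of this kind would be performed per k-means cluster of AMI profiles rather than on raw pooled data; the deep-learning refinement of the second algorithm is beyond a short sketch.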
Adaptation of EPEC-EM™ Curriculum in a Residency with Asynchronous Learning
Objective: The Education in Palliative and End-of-life Care for Emergency Medicine Project (EPEC™-EM) is a comprehensive curriculum in palliative and end-of-life care for emergency providers. We assessed the adaptation of this course to an EM residency program using synchronous and asynchronous learning. Methods: Curriculum adaptation followed Kern’s standardized six-step curriculum design process. Post-graduate year (PGY) 1-4 residents were taught all EPEC™-EM cognitive domains, divided into seven synchronous and seven asynchronous modules. All synchronous modules featured large-group didactic lectures and review of EPEC™-EM course materials. Asynchronous modules used only EPEC™-EM electronic course media for resident self-study. Targeted evaluation of EPEC™-EM knowledge objectives was conducted in a prospective case-control crossover study, with synchronous learning serving as the quasi-control, using validated exam tools. We compared de-identified test scores for the effectiveness of each learning method, using aggregate group performance means for each learning strategy. Results: Of 45 eligible residents, 55% participated in a pre-test for local needs analysis, and 78% completed a post-test to measure teaching-method effect. Post-test scores improved across all EPEC™-EM domains, with a mean improvement for synchronous modules of +28% (SD=9) and a mean improvement for asynchronous modules of +30% (SD=18). The aggregate mean difference between learning methods was 1.9% (95% CI -15.3, +19.0). Mean test scores of the residents who completed the post-test were: synchronous modules 77% (SD=12); asynchronous modules 83% (SD=13); all modules 80% (SD=12). Conclusion: EPEC™-EM adapted materials can improve resident knowledge of palliative medicine domains, as assessed through validated testing of course objectives. Synchronous and asynchronous learning methods appear to result in similar knowledge transfer, feasibly allowing some course content to be effectively delivered outside of large-group lectures. [West J Emerg Med. 2010;11(5):491-498.]
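The between-method comparison above (a small mean difference with a wide 95% CI) can be reproduced generically with a Welch-style interval for the difference of two group means. A minimal sketch with illustrative numbers: the group sizes are hypothetical (the abstract does not report them), and a normal z critical value is used instead of a t value for simplicity.

```python
import math

def diff_of_means_ci(m1, s1, n1, m2, s2, n2, z=1.96):
    """Welch-style 95% CI for (m2 - m1) using a normal critical value."""
    d = m2 - m1
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)  # standard error of the difference
    return d, (d - z * se, d + z * se)

# Illustrative only: +28% (SD 9) vs +30% (SD 18), hypothetical n=20 per arm
d, (lo, hi) = diff_of_means_ci(28, 9, 20, 30, 18, 20)
```

An interval this wide spans zero, which is why the study concludes only that the two methods appear to produce similar knowledge transfer, not that they are equivalent.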
Improvements to the APBS biomolecular solvation software suite
The Adaptive Poisson-Boltzmann Solver (APBS) software was developed to solve the equations of continuum electrostatics for large biomolecular assemblages and has had an impact on the study of a broad range of chemical, biological, and biomedical applications. APBS addresses three key technology challenges for understanding solvation and electrostatics in biomedical applications: accurate and efficient models for biomolecular solvation and electrostatics, robust and scalable software for applying those theories to biomolecular systems, and mechanisms for sharing and analyzing biomolecular electrostatics data in the scientific community. To address new research applications and advancing computational capabilities, we have continually updated APBS and its suite of accompanying software since its release in 2001. In this manuscript, we discuss the models and capabilities that have recently been implemented within the APBS software package, including: an analytical and a semi-analytical Poisson-Boltzmann solver, an optimized boundary element solver, a geometry-based geometric flow solvation model, a graph-theory-based algorithm for determining pKa values, and an improved web-based visualization tool for viewing electrostatics.
A Search for MeV to TeV Neutrinos from Fast Radio Bursts with IceCube
We present two searches for IceCube neutrino events coincident with 28 fast radio bursts (FRBs) and 1 repeating FRB. The first improves on a previous IceCube analysis - searching for spatial and temporal correlation of events with FRBs at energies greater than roughly 50 GeV - by increasing the effective area by an order of magnitude. The second is a search for temporal correlation of MeV neutrino events with FRBs. No significant correlation is found in either search; therefore, we set upper limits on the time-integrated neutrino flux emitted by FRBs for a range of emission timescales less than one day. These are the first limits on FRB neutrino emission at the MeV scale, and the limits set at higher energies are an order-of-magnitude improvement over those set by any neutrino telescope.
Efficient propagation of systematic uncertainties from calibration to analysis with the SnowStorm method in IceCube
Efficient treatment of systematic uncertainties that depend on a large number of nuisance parameters is a persistent difficulty in particle physics and astrophysics experiments. Where low-level effects are not amenable to simple parameterization or re-weighting, analyses often rely on discrete simulation sets to quantify the effects of nuisance parameters on key analysis observables. Such methods may become computationally untenable for analyses requiring high-statistics Monte Carlo with a large number of nuisance degrees of freedom, especially in cases where these degrees of freedom parameterize the shape of a continuous distribution. In this paper we present a method for treating systematic uncertainties in a computationally efficient and comprehensive manner using a single simulation set with multiple and continuously varied nuisance parameters. This method is demonstrated for the case of the depth-dependent effective dust distribution within the IceCube Neutrino Telescope.
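The core idea described above can be sketched in miniature: each simulated event is generated with its own continuously drawn value of a nuisance parameter, and the parameter's effect on a binned analysis observable is then extracted from that single set, for example as per-bin gradients. Everything below is a toy illustration with invented names and a one-dimensional observable, not IceCube's actual SnowStorm implementation.

```python
import numpy as np

# Toy sketch: one simulation set with a continuously varied nuisance
# parameter theta; recover the effect of theta on a binned observable
# by regressing per-bin occupancy on theta (a per-bin "gradient").
rng = np.random.default_rng(1)
n = 50_000
theta = rng.uniform(-1.0, 1.0, n)        # nuisance value drawn per event
x = rng.normal(0.3 * theta, 1.0)         # observable shifts with theta

edges = np.linspace(-4.0, 4.0, 21)
which = np.digitize(x, edges)            # bin index per event
grads = np.array([
    np.polyfit(theta, (which == b).astype(float), 1)[0]  # slope vs theta
    for b in range(1, len(edges))
])
# Bins left of the mean lose events as theta grows; bins to the right gain.
```

A discrete-set approach would instead require a separate high-statistics simulation for each sampled value of theta; here one continuously perturbed set yields the whole response at once.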
Combined sensitivity to the neutrino mass ordering with JUNO, the IceCube Upgrade, and PINGU
The ordering of the neutrino mass eigenstates is one of the fundamental open questions in neutrino physics. While current-generation neutrino oscillation experiments are able to produce moderate indications on this ordering, upcoming experiments of the next generation aim to provide conclusive evidence. In this paper we study the combined performance of the two future multi-purpose neutrino oscillation experiments JUNO and the IceCube Upgrade, which employ two very distinct and complementary routes toward the neutrino mass ordering. The approach pursued by the 20 kt medium-baseline reactor neutrino experiment JUNO consists of a careful investigation of the energy spectrum of oscillated ν̄e produced by ten nuclear reactor cores. The IceCube Upgrade, on the other hand, which consists of seven additional densely instrumented strings deployed in the center of IceCube DeepCore, will observe large numbers of atmospheric neutrinos that have undergone oscillations affected by Earth matter. In a joint fit with both approaches, tension occurs between their preferred mass-squared differences Δm₃₁² = m₃² − m₁² within the wrong mass ordering. In the case of JUNO and the IceCube Upgrade, this makes it possible to exclude the wrong ordering at >5σ on a timescale of 3-7 years - even under circumstances that are unfavorable to the experiments' individual sensitivities. For PINGU, a 26-string detector array designed as a potential low-energy extension to IceCube, the inverted ordering could be excluded within 1.5 years (3 years for the normal ordering) in a joint analysis.
Search for sources of astrophysical neutrinos using seven years of icecube cascade events
Low-background searches for astrophysical neutrino sources anywhere in the sky can be performed using cascade events induced by neutrinos of all flavors interacting in IceCube with energies as low as ∼1 TeV. Previously we showed that, even with just two years of data, the resulting sensitivity to sources in the southern sky is competitive with IceCube and ANTARES analyses using muon tracks induced by charged-current muon neutrino interactions - especially if the neutrino emission follows a soft energy spectrum or originates from an extended angular region. Here, we extend that work by adding five more years of data, significantly improving the cascade angular resolution, and including tests for point-like or diffuse Galactic emission, to which this data set is particularly well suited. For many of the signal candidates considered, this analysis is the most sensitive of any experiment to date. No significant clustering was observed, and thus many of the resulting constraints are the most stringent to date. In this paper we describe the improvements introduced in this analysis and discuss our results in the context of other recent work in neutrino astronomy.
Design and performance of the first IceAct demonstrator at the South Pole
In this paper we describe the first results of IceAct, a compact imaging air-Cherenkov telescope operating in coincidence with the IceCube Neutrino Observatory (IceCube) at the geographic South Pole. An array of IceAct telescopes (referred to as the IceAct project) is under consideration as part of the IceCube-Gen2 extension to IceCube. Surface detectors in general will be a powerful tool in IceCube-Gen2 for distinguishing astrophysical neutrinos from the dominant backgrounds of cosmic-ray induced atmospheric muons and neutrinos: the IceTop array is already in place as part of IceCube, but has a high energy threshold. Although the duty cycle will be lower for the IceAct telescopes than the present IceTop tanks, the IceAct telescopes may prove to be more effective at lowering the detection threshold for air showers. Additionally, small imaging air-Cherenkov telescopes in combination with IceTop, the deep IceCube detector, or other future detector systems might improve measurements of the composition of the cosmic-ray energy spectrum. In this paper we present measurements of a first 7-pixel imaging air-Cherenkov telescope demonstrator, proving the capability of this technology to measure air showers at the South Pole in coincidence with IceTop and the deep IceCube detector.
Time-Integrated Neutrino Source Searches with 10 Years of IceCube Data.
This Letter presents the results from pointlike neutrino source searches using ten years of IceCube data collected between April 6, 2008 and July 10, 2018. We evaluate the significance of an astrophysical signal from a pointlike source by looking for an excess of clustered neutrino events with energies typically above ∼1 TeV among the background of atmospheric muons and neutrinos. We perform a full-sky scan, a search within a selected source catalog, a catalog population study, and three stacked Galactic catalog searches. The most significant point in the northern hemisphere from scanning the sky is coincident with the Seyfert II galaxy NGC 1068, which was included in the source catalog search. The excess at the coordinates of NGC 1068 is inconsistent with background expectations at the level of 2.9σ after accounting for statistical trials from the entire catalog. The combination of this result along with excesses observed at the coordinates of three other sources, including TXS 0506+056, suggests that, collectively, correlations with sources in the northern catalog are inconsistent with background at 3.3σ significance. The southern catalog is consistent with background. These results, all based on searches for a cumulative neutrino signal integrated over the 10 years of available data, motivate further study of these and similar sources, including time-dependent analyses, multimessenger correlations, and the possibility of stronger evidence with coming upgrades to the detector.
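The "accounting for statistical trials" step mentioned above can be illustrated generically. A minimal sketch assuming independent trials (a Šidák-style correction; IceCube's actual trial correction relies on background-scrambled pseudo-experiments, which this simplification does not reproduce):

```python
from statistics import NormalDist

def post_trial_p(p_pre, n_trials):
    """Probability that at least one of n independent trials fluctuates
    to a pre-trial p-value this small (Sidak correction)."""
    return 1.0 - (1.0 - p_pre) ** n_trials

def p_to_sigma(p):
    """One-sided significance in Gaussian sigma for a given p-value."""
    return NormalDist().inv_cdf(1.0 - p)

# Illustrative: a pre-trial excess searched over a hypothetical 100-source
# catalog loses significance once the trials are accounted for.
p_pre = 1.0 - NormalDist().cdf(4.0)       # p-value of a 4-sigma excess
p_post = post_trial_p(p_pre, 100)
sigma_post = p_to_sigma(p_post)           # well below the pre-trial 4 sigma
```

This is why the abstract quotes 2.9σ "after accounting for statistical trials from the entire catalog" rather than the larger pre-trial value.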
Co-receptor choice by Vα14i NKT cells is driven by Th-POK expression rather than avoidance of CD8-mediated negative selection
Mouse natural killer T (NKT) cells with an invariant Vα14-Jα18 rearrangement (Vα14 invariant [Vα14i] NKT cells) are either CD4+CD8− or CD4−CD8−. Because transgenic mice with forced CD8 expression in all T cells exhibited a profound NKT cell deficit, the absence of CD8 has been attributed to negative selection. We now present evidence that CD8 does not serve as a coreceptor for CD1d recognition and that the defect in development in CD8 transgene homozygous mice is the result of a reduction in secondary T cell receptor α rearrangements. Thymocytes from mice hemizygous for the CD8 transgene have a less severe rearrangement defect and have functional CD8+ Vα14i NKT cells. Furthermore, we demonstrate that the transcription factor T-helper-inducing POZ/Krüppel-like factor (Th-POK) is expressed by Vα14i NKT cells throughout their differentiation and is necessary both to silence CD8 expression and for the functional maturity of Vα14i NKT cells. We therefore suggest that Th-POK expression is required for the normal development of Vα14i NKT cells and that the absence of CD8 expression by these cells is a by-product of such expression, as opposed to the result of negative selection of CD8-expressing Vα14i NKT cells.